Brain tumor detector¶
Nutshell¶
In this project I build a program that detects and localizes cancerous tumors in MRI images of human brains, following the course Modern Artificial Intelligence, lectured by Dr. Ryan Ahmed, Ph.D., MBA.
I will train two models which will
- classify the images as either containing a tumor or not
- localize the tumor within the brain
Introduction to the Brain Tumor Detection¶
Deep learning has proven to be as good as, and sometimes even better than, humans at detecting diseases from X-rays, MRI scans and CT scans. There is huge potential in using AI to speed up and improve the accuracy of diagnosis. This project uses the labeled dataset from https://www.kaggle.com/datasets/mateuszbuda/lgg-mri-segmentation, which consists of 3929 brain MRI scans together with the tumor locations. The final pipeline is a two-step process where
- A ResNet deep learning classifier model classifies the input images into two groups: tumor detected and tumor not detected.
- For the images where a tumor was detected, a second step is performed in which a ResUNet segmentation model localizes the tumor at the pixel level.
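The two-step pipeline above can be sketched as follows. This is an illustrative sketch, not the notebook's code: the model stand-ins, the class ordering (class 0 = no tumor) and the 0.5 mask threshold are assumptions.

```python
import numpy as np

# Hedged sketch of the two-step pipeline: classify first, segment only
# when a tumor is detected. Models and threshold are illustrative.
def detect_and_localize(image, classifier, segmenter):
    probs = classifier(image)          # step 1: tumor / no tumor probabilities
    if np.argmax(probs) == 0:          # assumed convention: class 0 = no tumor
        return None                    # skip the expensive segmentation step
    return segmenter(image) > 0.5      # step 2: threshold pixel-wise mask

# Toy stand-ins for the trained ResNet classifier and ResUNet segmenter
classifier = lambda img: np.array([0.1, 0.9])    # "tumor detected"
segmenter = lambda img: np.ones_like(img) * 0.9  # high tumor probability everywhere

mask = detect_and_localize(np.zeros((4, 4)), classifier, segmenter)
print(mask.all())  # → True
```

Running only the segmenter on images the classifier flags keeps inference cheap on the majority of healthy scans.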
Image segmentation¶
Image segmentation extracts information from images at the pixel level. It is used for object recognition and localization in applications such as medical imaging and self-driving cars. With deep learning approaches, image segmentation produces a pixel-wise mask of the image, using common architectures such as CNNs, fully convolutional networks (FCNs) and deep encoder-decoders.
With UNet, the input and the output have the same size, so the size of the image is preserved. In contrast to CNN image classification, where the image is converted to a vector and the entire image is assigned a single class label, UNet performs classification at the pixel level: a softmax is applied to every pixel and a loss is computed for every pixel. In other words, the segmentation problem is solved as a classification problem.
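The per-pixel classification idea can be made concrete with a tiny numpy sketch (illustrative only, not from the notebook): a softmax over the class axis gives each pixel its own class distribution, and the cross-entropy loss is computed per pixel and then averaged.

```python
import numpy as np

# Illustrative sketch: per-pixel softmax and cross-entropy on a tiny
# 2x2 "image" with 2 classes (background / tumor).
rng = np.random.default_rng(0)
logits = rng.normal(size=(2, 2, 2))   # (H, W, n_classes) raw scores
labels = np.array([[0, 1], [1, 0]])   # ground-truth class per pixel

# Softmax along the class axis: a class distribution for every pixel
exp = np.exp(logits - logits.max(axis=-1, keepdims=True))
probs = exp / exp.sum(axis=-1, keepdims=True)

# Cross-entropy is evaluated per pixel, then averaged over the image
h, w = labels.shape
pixel_loss = -np.log(probs[np.arange(h)[:, None], np.arange(w), labels])
print(probs.sum(axis=-1))  # each pixel's probabilities sum to 1
print(pixel_loss.mean())   # scalar segmentation loss
```

This is exactly "classification, repeated at every pixel", which is why segmentation losses look like classification losses summed over the image.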
Looking into the data¶
We have a CSV file that contains the patient IDs, the locations of the images and their masks, and an indicator of whether there is a tumor in the image (1 - tumor, 0 - healthy). There are 1373 images with tumors and 2556 healthy brain images, so the dataset is imbalanced.
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 3929 entries, 0 to 3928
Data columns (total 4 columns):
 #   Column      Non-Null Count  Dtype
---  ------      --------------  -----
 0   patient_id  3929 non-null   object
 1   image_path  3929 non-null   object
 2   mask_path   3929 non-null   object
 3   mask        3929 non-null   int64
dtypes: int64(1), object(3)
memory usage: 122.9+ KB
| | patient_id | image_path | mask_path | mask |
|---|---|---|---|---|
| 0 | TCGA_CS_5395_19981004 | TCGA_CS_5395_19981004/TCGA_CS_5395_19981004_1.tif | TCGA_CS_5395_19981004/TCGA_CS_5395_19981004_1_... | 0 |
| 1 | TCGA_CS_4944_20010208 | TCGA_CS_4944_20010208/TCGA_CS_4944_20010208_1.tif | TCGA_CS_4944_20010208/TCGA_CS_4944_20010208_1_... | 0 |
| 2 | TCGA_CS_4941_19960909 | TCGA_CS_4941_19960909/TCGA_CS_4941_19960909_1.tif | TCGA_CS_4941_19960909/TCGA_CS_4941_19960909_1_... | 0 |
| 3 | TCGA_CS_4943_20000902 | TCGA_CS_4943_20000902/TCGA_CS_4943_20000902_1.tif | TCGA_CS_4943_20000902/TCGA_CS_4943_20000902_1_... | 0 |
| 4 | TCGA_CS_5396_20010302 | TCGA_CS_5396_20010302/TCGA_CS_5396_20010302_1.tif | TCGA_CS_5396_20010302/TCGA_CS_5396_20010302_1_... | 0 |
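The class balance can be checked directly from the `mask` column shown above. This is a hedged sketch: the toy dataframe below stands in for the real CSV (in the notebook it would come from `pd.read_csv`), so the counts here are illustrative.

```python
import pandas as pd

# Sketch of the class-balance check; 'mask' is the tumor indicator
# column (1 - tumor, 0 - healthy). The toy data stands in for the CSV.
df = pd.DataFrame({
    "patient_id": ["p1", "p2", "p3"],
    "mask": [1, 0, 0],
})  # in the notebook: df = pd.read_csv(...)

counts = df["mask"].value_counts()
print(counts)            # per-class image counts
print(counts / len(df))  # class proportions, revealing any imbalance
```

On the real data this yields 2556 healthy versus 1373 tumor images, i.e. roughly a 65/35 split.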
Visualisation of the datasets¶
Below is an example of an MRI image and its matching mask. This example has a small tumor. In images where no tumor is present, the mask is completely black.
Below are visualisations of six MRIs with their masks overlaid in a rose color, to give a sense of the data I will be using in this project.
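The overlay itself is just an alpha blend of the mask region with a rose color. The sketch below shows the idea with numpy only; the image size, blend factor, and color values are illustrative assumptions, and in the notebook the result would be shown with `plt.imshow`.

```python
import numpy as np

# Minimal sketch of overlaying a binary tumor mask on an MRI slice in
# a rose color; shapes, alpha, and color are illustrative assumptions.
h, w = 64, 64
mri = np.zeros((h, w, 3), dtype=float)  # grayscale MRI as RGB in [0, 1]
mask = np.zeros((h, w), dtype=bool)
mask[20:30, 25:40] = True               # pretend tumor region

rose = np.array([1.0, 0.5, 0.6])        # overlay color
alpha = 0.5                             # blend strength
overlay = mri.copy()
overlay[mask] = (1 - alpha) * mri[mask] + alpha * rose

print(overlay[25, 30])  # blended rose pixel inside the mask
print(overlay[0, 0])    # untouched pixel outside the mask
```

Only pixels inside the mask are tinted, so the anatomy outside the tumor stays readable.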
Convolutional neural networks (CNNs)¶
- The first CNN layers extract low-level general features
- The last couple of layers perform the classification
- Local receptive fields scan the image first, searching for simple shapes such as edges and lines
- These edges are picked up by subsequent layers to form more complex features
A good visualisation of the feature extraction with convolutions can be found at https://setosa.io/ev/image-kernels/
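In the same spirit as the setosa.io demo, the sketch below slides a vertical-edge kernel over a tiny synthetic image. The image and kernel are illustrative; the loop implements a plain "valid" 2D convolution (cross-correlation) by hand.

```python
import numpy as np

# Sketch of how a convolution kernel picks up edges: a vertical-edge
# kernel slid over a 5x5 image whose right half is bright.
image = np.zeros((5, 5))
image[:, 2:] = 1.0                            # dark left half, bright right half

kernel = np.array([[-1, 0, 1],
                   [-1, 0, 1],
                   [-1, 0, 1]], dtype=float)  # responds to vertical edges

out = np.zeros((3, 3))                        # "valid" output size: (5-3+1)^2
for i in range(3):
    for j in range(3):
        out[i, j] = np.sum(image[i:i + 3, j:j + 3] * kernel)

print(out)  # strong responses only where the dark/bright boundary lies
```

The output is large exactly where the patch straddles the dark-to-bright boundary and zero over uniform regions, which is what "local receptive fields searching for edges" means in practice.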
ResNet (Residual Network)¶
- As CNNs grow deeper, vanishing gradients negatively impact network performance. The vanishing gradient problem occurs when the gradient becomes very small as it is backpropagated to earlier layers.
- ResNet's "skip connection" feature allows training networks of 152 layers without vanishing gradient problems
- ResNet adds "identity mappings" on top of the CNN
- ResNet deep networks are pretrained on the ImageNet dataset, which contains over 14 million images across more than 20 000 categories
ResNet paper (He et al., 2015): https://arxiv.org/pdf/1512.03385
As seen in Figure 6 of the ResNet paper, the ResNet architectures overcome the training challenges of deep networks compared to plain networks. An ensemble of ResNet models achieved a 3.57% top-5 error rate on the ImageNet dataset. This is better than human performance.
Siddarth Das has made a great comparison of CNN architecture performances, you can check it out here: https://medium.com/analytics-vidhya/cnns-architectures-lenet-alexnet-vgg-googlenet-resnet-and-more-666091488df5
Transfer learning¶
Transfer learning takes a network that has been trained to perform a specific task and retrains it for a similar task. Using a pretrained model can drastically reduce the computational time and the amount of training data required compared to starting from scratch. It can be compared to a salsa dancer starting to learn bachata: he or she will probably do a lot better than a person who has never danced before.
There are two main strategies in transfer learning:
- Freeze the trained CNN weights of the first layers and train only the newly added dense layers. The new layers are initialized with random weights.
- Retrain the entire CNN while setting the learning rate very small. With too large a learning rate, the already trained weights might change too dramatically.
In this project I will use approach 1.
Transfer learning has its own challenges:
- Negative Transfer: the source task/domain is “close enough to look useful” but actually pushes the model in the wrong direction, hurting performance compared to training from scratch. This occurs when the features of old and new tasks are not related.
- Which layers to transfer / freeze: deciding what to reuse vs retrain is nontrivial; freezing too much can underfit, unfreezing too much can overfit or destabilize training.
- Representation misalignment: even if tasks are related, the internal features might not separate target classes well, especially when target cues differ (e.g., medical imaging vs natural images).
- Transfer bounds: measuring the amount of knowledge transferred is crucial to ensure model quality and robustness. How to quantify this is a subject of ongoing research.
This is a great resource for transfer learning from Dipanjan Sarkar: https://towardsdatascience.com/a-comprehensive-hands-on-guide-to-transfer-learning-with-real-world-applications-in-deep-learning-212bf3b2f27a/
ResUNet¶
I will use ResUNet in the second part for the segmentation of the tumors.
- The ResUNet architecture combines the UNet backbone architecture with residual blocks
- The UNet is based on fully convolutional networks (FCNs) and is adapted to perform well on segmentation tasks
- ResUNet has three parts:
- Encoder or contracting path
- Bottleneck
- Decoder or expansive path
The contracting path consists of several contraction blocks, each of which passes its input through a res-block followed by 2x2 max-pooling. The number of feature maps doubles after each block, which helps the model learn complex features effectively.
The bottleneck takes the output of the contracting path and passes it through a res-block, followed by 2x2 up-sampling layers.
Each decoder block takes the up-sampled input from the previous layer and concatenates it with the corresponding output features from the res-blocks in the contracting path. The result is then passed through a res-block. This ensures that the features learned while contracting are used when reconstructing the image.
The output of the final expansion layer is passed through a 1x1 convolution layer to produce the desired output with the same size as the input.
The original paper that introduced ResUNet: https://arxiv.org/pdf/1904.00592
Part 1: Training a classifier model to detect if tumor exists or not¶
I use flow_from_dataframe for training, with batch size = 16 and class mode = 'categorical'.
# @title Preparing image generators
train_generator = datagen.flow_from_dataframe(
dataframe = train,
directory = './',
x_col = 'image_path',
y_col = 'mask',
subset = 'training',
batch_size =16,
shuffle = True,
class_mode = 'categorical',
target_size = (256, 256)
)
valid_generator = datagen.flow_from_dataframe(
dataframe = train,
directory = './',
x_col = 'image_path',
y_col = 'mask',
subset = 'validation',
batch_size = 16,
shuffle = True,
class_mode = 'categorical',
target_size = (256, 256)
)
#create a data generator for test images
#no need for splitting again because here we use the "test" data set
test_datagen = ImageDataGenerator(rescale=1./255)
test_generator = test_datagen.flow_from_dataframe(
dataframe = test,
directory = './',
x_col = 'image_path',
y_col = 'mask',
batch_size = 16,
shuffle = False,
class_mode = 'categorical',
target_size = (256, 256)
)
Found 2839 validated image filenames belonging to 2 classes. Found 500 validated image filenames belonging to 2 classes. Found 590 validated image filenames belonging to 2 classes.
Below is the architecture of the ResNet50 model. For transfer learning, all of these layers are set to trainable = False to stop their weights from changing. The last layers, shown in purple, are the newly added layers that will be trained.
# @title Retrieve ResNet50 base model
# Input tensor 256 x 256 x 3
basemodel = ResNet50(weights = 'imagenet', include_top = False,
                     input_tensor = Input(shape = (256, 256, 3)))
# freeze the pretrained layers so their weights are not updated
for layer in basemodel.layers:
    layer.trainable = False
Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet50_weights_tf_dim_ordering_tf_kernels_notop.h5 94765736/94765736 ━━━━━━━━━━━━━━━━━━━━ 0s 0us/step
# Add classification head to the base model
headmodel = basemodel.output
headmodel = AveragePooling2D(pool_size = (4, 4))(headmodel)
headmodel = Flatten(name = 'flatten')(headmodel)
headmodel = Dense(256, activation = 'relu')(headmodel)
headmodel = Dropout(0.3)(headmodel)
headmodel = Dense(2, activation = 'softmax')(headmodel)
fullmodel = Model(inputs = basemodel.input, outputs = headmodel)
# compile the model
fullmodel.compile(loss = 'categorical_crossentropy', optimizer='adam',
metrics=["accuracy"])
# use early stopping to exit training
earlystopping = EarlyStopping(monitor='val_loss',
mode='min',
verbose = 1,
patience = 20)
# save the best model with least validation loss
checkpointer = ModelCheckpoint(filepath=model_base/'classifier-resnet-weights_22-12-2025.keras',
verbose=1,
save_best_only=True)
# Callbacks: logs epoch results to CSV
csv_logger = CSVLogger(
model_base/"training_history_classifier_model_22-12-2025.csv",
append=True, # keep adding if file exists
separator=',' # comma-separated
)
if train_model:
history = fullmodel.fit(train_generator,
steps_per_epoch = train_generator.n // train_generator.batch_size,
epochs=60,
validation_data=valid_generator,
validation_steps= valid_generator.n // valid_generator.batch_size,
callbacks=[checkpointer, earlystopping,csv_logger])
/usr/local/lib/python3.12/dist-packages/keras/src/trainers/data_adapters/py_dataset_adapter.py:121: UserWarning: Your `PyDataset` class should call `super().__init__(**kwargs)` in its constructor. `**kwargs` can include `workers`, `use_multiprocessing`, `max_queue_size`. Do not pass these arguments to `fit()`, as they will be ignored.
Epoch 1/60 177/177 ━━━━━━━━━━━━━━━━━━━━ 0s 5s/step - accuracy: 0.6593 - loss: 1.2657 Epoch 1: val_loss improved from inf to 1.52948, saving model to /content/drive/MyDrive/Colab Notebooks/brain-tumor-detector/Models/classifier-resnet-weights_22-12-2025.keras 177/177 ━━━━━━━━━━━━━━━━━━━━ 1071s 6s/step - accuracy: 0.6596 - loss: 1.2629 - val_accuracy: 0.3367 - val_loss: 1.5295 Epoch 2/60 1/177 ━━━━━━━━━━━━━━━━━━━━ 8s 48ms/step - accuracy: 0.8750 - loss: 0.2890
/usr/local/lib/python3.12/dist-packages/keras/src/trainers/epoch_iterator.py:116: UserWarning: Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches. You may need to use the `.repeat()` function when building your dataset.
Epoch 2: val_loss did not improve from 1.52948 177/177 ━━━━━━━━━━━━━━━━━━━━ 2s 10ms/step - accuracy: 0.8750 - loss: 0.2890 - val_accuracy: 0.3387 - val_loss: 1.8686 Epoch 3/60 177/177 ━━━━━━━━━━━━━━━━━━━━ 0s 57ms/step - accuracy: 0.7892 - loss: 0.4580 Epoch 3: val_loss improved from 1.52948 to 0.65035, saving model to /content/drive/MyDrive/Colab Notebooks/brain-tumor-detector/Models/classifier-resnet-weights_22-12-2025.keras 177/177 ━━━━━━━━━━━━━━━━━━━━ 14s 77ms/step - accuracy: 0.7893 - loss: 0.4579 - val_accuracy: 0.6593 - val_loss: 0.6504 Epoch 4/60 1/177 ━━━━━━━━━━━━━━━━━━━━ 7s 45ms/step - accuracy: 0.9375 - loss: 0.1734 Epoch 4: val_loss improved from 0.65035 to 0.64082, saving model to /content/drive/MyDrive/Colab Notebooks/brain-tumor-detector/Models/classifier-resnet-weights_22-12-2025.keras 177/177 ━━━━━━━━━━━━━━━━━━━━ 4s 24ms/step - accuracy: 0.9375 - loss: 0.1734 - val_accuracy: 0.6613 - val_loss: 0.6408 Epoch 5/60 177/177 ━━━━━━━━━━━━━━━━━━━━ 0s 93ms/step - accuracy: 0.8342 - loss: 0.4006 Epoch 5: val_loss did not improve from 0.64082 177/177 ━━━━━━━━━━━━━━━━━━━━ 18s 103ms/step - accuracy: 0.8342 - loss: 0.4006 - val_accuracy: 0.6633 - val_loss: 2.8768 Epoch 6/60 1/177 ━━━━━━━━━━━━━━━━━━━━ 7s 43ms/step - accuracy: 0.9375 - loss: 0.2511 Epoch 6: val_loss did not improve from 0.64082 177/177 ━━━━━━━━━━━━━━━━━━━━ 2s 10ms/step - accuracy: 0.9375 - loss: 0.2511 - val_accuracy: 0.6593 - val_loss: 3.2475 Epoch 7/60 177/177 ━━━━━━━━━━━━━━━━━━━━ 0s 56ms/step - accuracy: 0.8630 - loss: 0.3163 Epoch 7: val_loss did not improve from 0.64082 177/177 ━━━━━━━━━━━━━━━━━━━━ 12s 66ms/step - accuracy: 0.8631 - loss: 0.3162 - val_accuracy: 0.6593 - val_loss: 1.0518 Epoch 8/60 1/177 ━━━━━━━━━━━━━━━━━━━━ 7s 42ms/step - accuracy: 0.7500 - loss: 0.3713 Epoch 8: val_loss did not improve from 0.64082 177/177 ━━━━━━━━━━━━━━━━━━━━ 2s 9ms/step - accuracy: 0.7500 - loss: 0.3713 - val_accuracy: 0.6633 - val_loss: 1.0338 Epoch 9/60 177/177 ━━━━━━━━━━━━━━━━━━━━ 0s 55ms/step - 
accuracy: 0.8790 - loss: 0.2813 Epoch 9: val_loss improved from 0.64082 to 0.61995, saving model to /content/drive/MyDrive/Colab Notebooks/brain-tumor-detector/Models/classifier-resnet-weights_22-12-2025.keras 177/177 ━━━━━━━━━━━━━━━━━━━━ 13s 75ms/step - accuracy: 0.8791 - loss: 0.2813 - val_accuracy: 0.6935 - val_loss: 0.6200 Epoch 10/60 1/177 ━━━━━━━━━━━━━━━━━━━━ 7s 44ms/step - accuracy: 0.8750 - loss: 0.4855 Epoch 10: val_loss improved from 0.61995 to 0.60929, saving model to /content/drive/MyDrive/Colab Notebooks/brain-tumor-detector/Models/classifier-resnet-weights_22-12-2025.keras 177/177 ━━━━━━━━━━━━━━━━━━━━ 5s 25ms/step - accuracy: 0.8750 - loss: 0.4855 - val_accuracy: 0.7036 - val_loss: 0.6093 Epoch 11/60 177/177 ━━━━━━━━━━━━━━━━━━━━ 0s 88ms/step - accuracy: 0.9103 - loss: 0.2541 Epoch 11: val_loss improved from 0.60929 to 0.43979, saving model to /content/drive/MyDrive/Colab Notebooks/brain-tumor-detector/Models/classifier-resnet-weights_22-12-2025.keras 177/177 ━━━━━━━━━━━━━━━━━━━━ 20s 108ms/step - accuracy: 0.9103 - loss: 0.2541 - val_accuracy: 0.7964 - val_loss: 0.4398 Epoch 12/60 1/177 ━━━━━━━━━━━━━━━━━━━━ 7s 45ms/step - accuracy: 1.0000 - loss: 0.0712 Epoch 12: val_loss improved from 0.43979 to 0.42589, saving model to /content/drive/MyDrive/Colab Notebooks/brain-tumor-detector/Models/classifier-resnet-weights_22-12-2025.keras 177/177 ━━━━━━━━━━━━━━━━━━━━ 4s 20ms/step - accuracy: 1.0000 - loss: 0.0712 - val_accuracy: 0.8145 - val_loss: 0.4259 Epoch 13/60 177/177 ━━━━━━━━━━━━━━━━━━━━ 0s 94ms/step - accuracy: 0.9239 - loss: 0.2116 Epoch 13: val_loss improved from 0.42589 to 0.27524, saving model to /content/drive/MyDrive/Colab Notebooks/brain-tumor-detector/Models/classifier-resnet-weights_22-12-2025.keras 177/177 ━━━━━━━━━━━━━━━━━━━━ 20s 114ms/step - accuracy: 0.9239 - loss: 0.2115 - val_accuracy: 0.8911 - val_loss: 0.2752 Epoch 14/60 1/177 ━━━━━━━━━━━━━━━━━━━━ 7s 42ms/step - accuracy: 0.9375 - loss: 0.1121 Epoch 14: val_loss did not improve from 
0.27524 177/177 ━━━━━━━━━━━━━━━━━━━━ 2s 10ms/step - accuracy: 0.9375 - loss: 0.1121 - val_accuracy: 0.8851 - val_loss: 0.2856 Epoch 15/60 177/177 ━━━━━━━━━━━━━━━━━━━━ 0s 88ms/step - accuracy: 0.9442 - loss: 0.1573 Epoch 15: val_loss improved from 0.27524 to 0.26114, saving model to /content/drive/MyDrive/Colab Notebooks/brain-tumor-detector/Models/classifier-resnet-weights_22-12-2025.keras 177/177 ━━━━━━━━━━━━━━━━━━━━ 19s 107ms/step - accuracy: 0.9442 - loss: 0.1573 - val_accuracy: 0.9032 - val_loss: 0.2611 Epoch 16/60 1/177 ━━━━━━━━━━━━━━━━━━━━ 7s 43ms/step - accuracy: 0.9375 - loss: 0.1289 Epoch 16: val_loss improved from 0.26114 to 0.26042, saving model to /content/drive/MyDrive/Colab Notebooks/brain-tumor-detector/Models/classifier-resnet-weights_22-12-2025.keras 177/177 ━━━━━━━━━━━━━━━━━━━━ 4s 24ms/step - accuracy: 0.9375 - loss: 0.1289 - val_accuracy: 0.9073 - val_loss: 0.2604 Epoch 17/60 177/177 ━━━━━━━━━━━━━━━━━━━━ 0s 92ms/step - accuracy: 0.9408 - loss: 0.1535 Epoch 17: val_loss improved from 0.26042 to 0.18035, saving model to /content/drive/MyDrive/Colab Notebooks/brain-tumor-detector/Models/classifier-resnet-weights_22-12-2025.keras 177/177 ━━━━━━━━━━━━━━━━━━━━ 20s 112ms/step - accuracy: 0.9409 - loss: 0.1534 - val_accuracy: 0.9415 - val_loss: 0.1803 Epoch 18/60 1/177 ━━━━━━━━━━━━━━━━━━━━ 7s 42ms/step - accuracy: 1.0000 - loss: 0.0220 Epoch 18: val_loss did not improve from 0.18035 177/177 ━━━━━━━━━━━━━━━━━━━━ 2s 11ms/step - accuracy: 1.0000 - loss: 0.0220 - val_accuracy: 0.9395 - val_loss: 0.1892 Epoch 19/60 177/177 ━━━━━━━━━━━━━━━━━━━━ 0s 89ms/step - accuracy: 0.9618 - loss: 0.1116 Epoch 19: val_loss improved from 0.18035 to 0.14857, saving model to /content/drive/MyDrive/Colab Notebooks/brain-tumor-detector/Models/classifier-resnet-weights_22-12-2025.keras 177/177 ━━━━━━━━━━━━━━━━━━━━ 19s 109ms/step - accuracy: 0.9618 - loss: 0.1116 - val_accuracy: 0.9395 - val_loss: 0.1486 Epoch 20/60 1/177 ━━━━━━━━━━━━━━━━━━━━ 7s 43ms/step - accuracy: 0.9375 - 
loss: 0.0897 Epoch 20: val_loss improved from 0.14857 to 0.14670, saving model to /content/drive/MyDrive/Colab Notebooks/brain-tumor-detector/Models/classifier-resnet-weights_22-12-2025.keras 177/177 ━━━━━━━━━━━━━━━━━━━━ 3s 19ms/step - accuracy: 0.9375 - loss: 0.0897 - val_accuracy: 0.9395 - val_loss: 0.1467 Epoch 21/60 177/177 ━━━━━━━━━━━━━━━━━━━━ 0s 93ms/step - accuracy: 0.9781 - loss: 0.0578 Epoch 21: val_loss did not improve from 0.14670 177/177 ━━━━━━━━━━━━━━━━━━━━ 18s 103ms/step - accuracy: 0.9780 - loss: 0.0580 - val_accuracy: 0.8105 - val_loss: 0.4949 Epoch 22/60 1/177 ━━━━━━━━━━━━━━━━━━━━ 7s 43ms/step - accuracy: 0.9375 - loss: 0.0745 Epoch 22: val_loss did not improve from 0.14670 177/177 ━━━━━━━━━━━━━━━━━━━━ 2s 10ms/step - accuracy: 0.9375 - loss: 0.0745 - val_accuracy: 0.8145 - val_loss: 0.5055 Epoch 23/60 177/177 ━━━━━━━━━━━━━━━━━━━━ 0s 69ms/step - accuracy: 0.9600 - loss: 0.1070 Epoch 23: val_loss did not improve from 0.14670 177/177 ━━━━━━━━━━━━━━━━━━━━ 14s 79ms/step - accuracy: 0.9599 - loss: 0.1072 - val_accuracy: 0.6552 - val_loss: 29.7710 Epoch 24/60 1/177 ━━━━━━━━━━━━━━━━━━━━ 7s 43ms/step - accuracy: 1.0000 - loss: 0.1288 Epoch 24: val_loss did not improve from 0.14670 177/177 ━━━━━━━━━━━━━━━━━━━━ 2s 9ms/step - accuracy: 1.0000 - loss: 0.1288 - val_accuracy: 0.6573 - val_loss: 30.7215 Epoch 25/60 177/177 ━━━━━━━━━━━━━━━━━━━━ 0s 56ms/step - accuracy: 0.9227 - loss: 0.2066 Epoch 25: val_loss did not improve from 0.14670 177/177 ━━━━━━━━━━━━━━━━━━━━ 12s 65ms/step - accuracy: 0.9227 - loss: 0.2066 - val_accuracy: 0.8790 - val_loss: 1.6968 Epoch 26/60 1/177 ━━━━━━━━━━━━━━━━━━━━ 7s 43ms/step - accuracy: 0.9375 - loss: 0.4549 Epoch 26: val_loss did not improve from 0.14670 177/177 ━━━━━━━━━━━━━━━━━━━━ 2s 9ms/step - accuracy: 0.9375 - loss: 0.4549 - val_accuracy: 0.8589 - val_loss: 2.0143 Epoch 27/60 177/177 ━━━━━━━━━━━━━━━━━━━━ 0s 55ms/step - accuracy: 0.9463 - loss: 0.1448 Epoch 27: val_loss did not improve from 0.14670 177/177 ━━━━━━━━━━━━━━━━━━━━ 
11s 64ms/step - accuracy: 0.9463 - loss: 0.1448 - val_accuracy: 0.9274 - val_loss: 0.2727 Epoch 28/60 1/177 ━━━━━━━━━━━━━━━━━━━━ 6s 38ms/step - accuracy: 0.9375 - loss: 0.1148 Epoch 28: val_loss did not improve from 0.14670 177/177 ━━━━━━━━━━━━━━━━━━━━ 2s 10ms/step - accuracy: 0.9375 - loss: 0.1148 - val_accuracy: 0.9234 - val_loss: 0.2679 Epoch 29/60 177/177 ━━━━━━━━━━━━━━━━━━━━ 0s 55ms/step - accuracy: 0.9645 - loss: 0.0904 Epoch 29: val_loss did not improve from 0.14670 177/177 ━━━━━━━━━━━━━━━━━━━━ 11s 64ms/step - accuracy: 0.9645 - loss: 0.0906 - val_accuracy: 0.7742 - val_loss: 1.2347 Epoch 30/60 1/177 ━━━━━━━━━━━━━━━━━━━━ 6s 38ms/step - accuracy: 0.9375 - loss: 0.1273 Epoch 30: val_loss did not improve from 0.14670 177/177 ━━━━━━━━━━━━━━━━━━━━ 2s 9ms/step - accuracy: 0.9375 - loss: 0.1273 - val_accuracy: 0.7903 - val_loss: 1.2631 Epoch 31/60 177/177 ━━━━━━━━━━━━━━━━━━━━ 0s 58ms/step - accuracy: 0.9619 - loss: 0.1006 Epoch 31: val_loss did not improve from 0.14670 177/177 ━━━━━━━━━━━━━━━━━━━━ 12s 68ms/step - accuracy: 0.9619 - loss: 0.1007 - val_accuracy: 0.9214 - val_loss: 0.2919 Epoch 32/60 1/177 ━━━━━━━━━━━━━━━━━━━━ 7s 43ms/step - accuracy: 0.9375 - loss: 0.3318 Epoch 32: val_loss did not improve from 0.14670 177/177 ━━━━━━━━━━━━━━━━━━━━ 2s 10ms/step - accuracy: 0.9375 - loss: 0.3318 - val_accuracy: 0.9153 - val_loss: 0.2921 Epoch 33/60 177/177 ━━━━━━━━━━━━━━━━━━━━ 0s 56ms/step - accuracy: 0.9603 - loss: 0.1285 Epoch 33: val_loss did not improve from 0.14670 177/177 ━━━━━━━━━━━━━━━━━━━━ 12s 65ms/step - accuracy: 0.9602 - loss: 0.1288 - val_accuracy: 0.8911 - val_loss: 0.4200 Epoch 34/60 1/177 ━━━━━━━━━━━━━━━━━━━━ 6s 38ms/step - accuracy: 0.9375 - loss: 0.3122 Epoch 34: val_loss did not improve from 0.14670 177/177 ━━━━━━━━━━━━━━━━━━━━ 2s 10ms/step - accuracy: 0.9375 - loss: 0.3122 - val_accuracy: 0.8911 - val_loss: 0.4037 Epoch 35/60 177/177 ━━━━━━━━━━━━━━━━━━━━ 0s 55ms/step - accuracy: 0.9632 - loss: 0.0953 Epoch 35: val_loss did not improve from 0.14670 
177/177 ━━━━━━━━━━━━━━━━━━━━ 12s 65ms/step - accuracy: 0.9632 - loss: 0.0953 - val_accuracy: 0.9415 - val_loss: 0.1992 Epoch 36/60 1/177 ━━━━━━━━━━━━━━━━━━━━ 6s 39ms/step - accuracy: 1.0000 - loss: 0.0069 Epoch 36: val_loss did not improve from 0.14670 177/177 ━━━━━━━━━━━━━━━━━━━━ 2s 10ms/step - accuracy: 1.0000 - loss: 0.0069 - val_accuracy: 0.9456 - val_loss: 0.1761 Epoch 37/60 177/177 ━━━━━━━━━━━━━━━━━━━━ 0s 55ms/step - accuracy: 0.9762 - loss: 0.0785 Epoch 37: val_loss did not improve from 0.14670 177/177 ━━━━━━━━━━━━━━━━━━━━ 11s 65ms/step - accuracy: 0.9762 - loss: 0.0785 - val_accuracy: 0.9355 - val_loss: 0.2020 Epoch 38/60 1/177 ━━━━━━━━━━━━━━━━━━━━ 6s 38ms/step - accuracy: 1.0000 - loss: 0.0051 Epoch 38: val_loss did not improve from 0.14670 177/177 ━━━━━━━━━━━━━━━━━━━━ 2s 10ms/step - accuracy: 1.0000 - loss: 0.0051 - val_accuracy: 0.9355 - val_loss: 0.1975 Epoch 39/60 177/177 ━━━━━━━━━━━━━━━━━━━━ 0s 54ms/step - accuracy: 0.9759 - loss: 0.0767 Epoch 39: val_loss did not improve from 0.14670 177/177 ━━━━━━━━━━━━━━━━━━━━ 11s 64ms/step - accuracy: 0.9759 - loss: 0.0769 - val_accuracy: 0.8891 - val_loss: 0.2712 Epoch 40/60 1/177 ━━━━━━━━━━━━━━━━━━━━ 7s 43ms/step - accuracy: 1.0000 - loss: 0.0333 Epoch 40: val_loss did not improve from 0.14670 177/177 ━━━━━━━━━━━━━━━━━━━━ 2s 10ms/step - accuracy: 1.0000 - loss: 0.0333 - val_accuracy: 0.8931 - val_loss: 0.2635 Epoch 40: early stopping
The best model was reached at epoch 20; training then ran 20 more epochs without improvement and was terminated by early stopping (patience = 20). Interestingly, the validation loss had one massive peak around epochs 23-24.
Assess classifier model performance¶
37/37 ━━━━━━━━━━━━━━━━━━━━ 184s 5s/step
The model accuracy is 0.94
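The `y_true` and `y_pred` used in the report below are not shown being built in this excerpt. A hedged sketch of how they are typically derived from the (unshuffled) test generator follows; the probability values here are made up for illustration, and the commented-out calls name the notebook's `fullmodel` and `test_generator`.

```python
import numpy as np

# Hedged sketch of deriving y_pred / y_true for the classification
# report; the probabilities below are fabricated for illustration.
probs = np.array([[0.9, 0.1],   # in the notebook, roughly:
                  [0.2, 0.8],   #   probs = fullmodel.predict(test_generator)
                  [0.4, 0.6]])
y_pred = np.argmax(probs, axis=1)  # softmax outputs -> predicted class index

# y_true would come from the generator's labels, e.g.
#   y_true = test_generator.classes  (valid because shuffle = False)
y_true = np.array([0, 1, 0])

accuracy = (y_pred == y_true).mean()
print(y_pred, accuracy)
```

Note that this only lines up with the true labels because the test generator was created with shuffle = False.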
from sklearn.metrics import classification_report
print(classification_report(y_true, y_pred, labels = [0,1]))
precision recall f1-score support
0 0.93 0.98 0.95 366
1 0.97 0.88 0.92 224
micro avg 0.94 0.94 0.94 590
macro avg 0.95 0.93 0.94 590
weighted avg 0.94 0.94 0.94 590
Part 2: Building a segmentation model to localise tumors¶
(1373, 4)
Saved to: /content/drive/MyDrive/Colab Notebooks/brain-tumor-detector/Models/segmentation_splits.json
def resblock(X, f):
# make a copy of input
X_copy = X
# main path
# Read more about he_normal: https://medium.com/@prateekvishnu/xavier-and-he-normal-he-et-al-initialization-8e3d7a087528
X = Conv2D(f, kernel_size = (1,1) ,strides = (1,1),kernel_initializer ='he_normal')(X)
X = BatchNormalization()(X)
X = Activation('relu')(X)
X = Conv2D(f, kernel_size = (3,3), strides =(1,1), padding = 'same', kernel_initializer ='he_normal')(X)
X = BatchNormalization()(X)
# Short path
# Read more here: https://towardsdatascience.com/understanding-and-coding-a-resnet-in-keras-446d7ff84d33
X_copy = Conv2D(f, kernel_size = (1,1), strides =(1,1), kernel_initializer ='he_normal')(X_copy)
X_copy = BatchNormalization()(X_copy)
# Adding the output from main path and short path together
X = Add()([X,X_copy])
X = Activation('relu')(X)
return X
# function to upscale and concatenate the values passed
def upsample_concat(x, skip):
x = UpSampling2D((2,2))(x)
merge = Concatenate()([x, skip])
return merge
input_shape = (256,256,3)
# Input tensor shape
X_input = Input(input_shape)
# Stage 1
conv1_in = Conv2D(16,3,activation= 'relu', padding = 'same', kernel_initializer ='he_normal')(X_input)
conv1_in = BatchNormalization()(conv1_in)
conv1_in = Conv2D(16,3,activation= 'relu', padding = 'same', kernel_initializer ='he_normal')(conv1_in)
conv1_in = BatchNormalization()(conv1_in)
pool_1 = MaxPool2D(pool_size = (2,2))(conv1_in)
# Stage 2
conv2_in = resblock(pool_1, 32)
pool_2 = MaxPool2D(pool_size = (2,2))(conv2_in)
# Stage 3
conv3_in = resblock(pool_2, 64)
pool_3 = MaxPool2D(pool_size = (2,2))(conv3_in)
# Stage 4
conv4_in = resblock(pool_3, 128)
pool_4 = MaxPool2D(pool_size = (2,2))(conv4_in)
# Stage 5 (Bottle Neck)
conv5_in = resblock(pool_4, 256)
# Upscale stage 1
up_1 = upsample_concat(conv5_in, conv4_in)
up_1 = resblock(up_1, 128)
# Upscale stage 2
up_2 = upsample_concat(up_1, conv3_in)
up_2 = resblock(up_2, 64)
# Upscale stage 3
up_3 = upsample_concat(up_2, conv2_in)
up_3 = resblock(up_3, 32)
# Upscale stage 4
up_4 = upsample_concat(up_3, conv1_in)
up_4 = resblock(up_4, 16)
# Final Output
output = Conv2D(1, (1,1), padding = "same", activation = "sigmoid")(up_4)
model_seg = Model(inputs = X_input, outputs = output )
# @title Compiling the segmentation model
from utilities import tversky, tversky_loss, focal_tversky
adam = keras.optimizers.Adam(learning_rate = 0.05, epsilon = 0.1)
model_seg.compile(optimizer = adam, loss = focal_tversky, metrics = [tversky])
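The `tversky` and `focal_tversky` functions are imported from `utilities` and not shown here. Below is an illustrative numpy sketch of what such functions typically compute (the real versions operate on Keras tensors); the `alpha`, `beta`, and `gamma` values are common defaults, not taken from the notebook.

```python
import numpy as np

# Numpy sketch of the Tversky index and focal Tversky loss; alpha
# penalizes false negatives, beta false positives. Values are assumed
# common defaults, not the notebook's actual parameters.
def tversky_index(y_true, y_pred, alpha=0.7, beta=0.3, smooth=1e-6):
    tp = np.sum(y_true * y_pred)           # correctly flagged tumor pixels
    fn = np.sum(y_true * (1 - y_pred))     # missed tumor pixels
    fp = np.sum((1 - y_true) * y_pred)     # false alarms
    return (tp + smooth) / (tp + alpha * fn + beta * fp + smooth)

def focal_tversky_loss(y_true, y_pred, gamma=0.75):
    # gamma < 1 amplifies the gradient on hard, poorly segmented examples
    return (1 - tversky_index(y_true, y_pred)) ** gamma

y_true = np.array([0, 0, 1, 1], dtype=float)
y_pred = np.array([0, 1, 1, 1], dtype=float)   # one false positive
print(tversky_index(y_true, y_pred))           # < 1 because of the FP
print(focal_tversky_loss(y_true, y_true))      # perfect prediction -> ~0
```

Weighting false negatives more heavily than false positives (alpha > beta) suits this task, where missing tumor pixels is worse than over-segmenting, and the small tumor masks make the dataset heavily imbalanced at the pixel level.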
Epoch 1/60 72/72 ━━━━━━━━━━━━━━━━━━━━ 0s 9s/step - loss: 0.8267 - tversky: 0.2203
Epoch 1: val_loss improved from inf to 0.59689, saving model to /content/drive/MyDrive/Colab Notebooks/brain-tumor-detector/Models/ResUNet-weights_22-12-2025.keras
72/72 ━━━━━━━━━━━━━━━━━━━━ 760s 10s/step - loss: 0.8249 - tversky: 0.2224 - val_loss: 0.5969 - val_tversky: 0.4970
...
Epoch 36: val_loss improved from 0.16876 to 0.16731, saving model to /content/drive/MyDrive/Colab Notebooks/brain-tumor-detector/Models/ResUNet-weights_22-12-2025.keras
72/72 ━━━━━━━━━━━━━━━━━━━━ 12s 168ms/step - loss: 0.1029 - tversky: 0.9516 - val_loss: 0.1673 - val_tversky: 0.9076
...
Epoch 56: val_loss did not improve from 0.16731
72/72 ━━━━━━━━━━━━━━━━━━━━ 11s 157ms/step - loss: 0.0826 - tversky: 0.9639 - val_loss: 0.1789 - val_tversky: 0.8989
Epoch 56: early stopping
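The `tversky` values in the training log are the Tversky index, a generalization of the Dice coefficient that weights false positives and false negatives separately, which is useful for imbalanced segmentation masks. A minimal NumPy sketch of the index (the `alpha`/`beta` weights and the weighting convention here are illustrative assumptions, not necessarily the exact ones used in training):

```python
import numpy as np

def tversky_index(y_true, y_pred, alpha=0.3, beta=0.7, eps=1e-7):
    """Tversky index between two binary masks.

    alpha weights false positives, beta weights false negatives;
    alpha = beta = 0.5 recovers the Dice coefficient.
    """
    y_true = y_true.astype(float).ravel()
    y_pred = y_pred.astype(float).ravel()
    tp = np.sum(y_true * y_pred)         # true positives
    fp = np.sum((1 - y_true) * y_pred)   # false positives
    fn = np.sum(y_true * (1 - y_pred))   # false negatives
    return (tp + eps) / (tp + alpha * fp + beta * fn + eps)

mask = np.array([[0, 1], [1, 0]])
print(round(tversky_index(mask, mask), 4))  # perfect overlap → 1.0
```

The corresponding Tversky loss is simply `1 - tversky_index`, so minimizing the loss maximizes the overlap between the predicted and ground-truth masks.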
The best model was reached at epoch 36. After that, the validation loss did not improve, and training was halted by the early stopping rule (patience = 20).
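The checkpointing and early-stopping behavior visible in the log can be reproduced with standard Keras callbacks. A sketch under the assumption that training monitors `val_loss` (the filename here is illustrative):

```python
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint

# Save the weights whenever the validation loss improves
checkpoint = ModelCheckpoint(
    "ResUNet-weights.keras",  # illustrative filename
    monitor="val_loss",
    save_best_only=True,
    verbose=1,
)

# Stop training after 20 epochs without improvement in val_loss
early_stop = EarlyStopping(
    monitor="val_loss",
    patience=20,
    verbose=1,
)

# model.fit(..., epochs=60, callbacks=[checkpoint, early_stop])
```

With `save_best_only=True`, the saved file always holds the best-so-far weights, so the epoch-36 model survives even though training continued for another 20 epochs.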
if not retrain_model:
    # Load the saved training history from CSV
    df_his = pd.read_csv(model_base/"training_history_segmodel_22-12-2025.csv", sep=None, engine="python")
    df_his = df_his.apply(pd.to_numeric, errors="coerce")
    history_data = df_his.to_dict(orient="list")
else:
    history_data = history.history  # dict from the Keras History object

# ---- plot ----
out_path = image_base/"seg_model_train_history.png"
fig, ax = plt.subplots(figsize=(5, 3))
ax.plot(history_data["loss"], label="train_loss")
ax.plot(history_data["val_loss"], label="val_loss")
ax.plot(history_data["tversky"], label="train_tversky")
ax.plot(history_data["val_tversky"], label="val_tversky")
ax.set_title("Segmentation Model Training History")
ax.set_ylabel("Loss / Tversky index")
ax.set_xlabel("Epoch")
ax.legend(loc="center right")
fig.tight_layout()
fig.savefig(out_path, dpi=200, bbox_inches="tight", transparent=True)
plt.close(fig)
display(Image(filename=out_path, width=460))
Assessing the trained segmentation model performance¶
To assess the performance, the predicted and actual masks of 10 test cases are shown below. The model did not see these images during training.
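A common way to compare a predicted mask against the ground truth is to overlay it on the MRI slice itself. A minimal matplotlib sketch (the image and mask arrays here are synthetic stand-ins, not data from the project):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend for saving to file
import matplotlib.pyplot as plt

# Synthetic stand-ins for an MRI slice and a binary tumor mask
rng = np.random.default_rng(0)
mri = rng.random((256, 256))
mask = np.zeros((256, 256))
mask[100:140, 120:160] = 1

fig, axes = plt.subplots(1, 2, figsize=(6, 3))
for ax, title in zip(axes, ["MRI", "MRI + predicted mask"]):
    ax.imshow(mri, cmap="gray")
    ax.set_title(title)
    ax.axis("off")

# Mask out the background so only tumor pixels are colored in the overlay
axes[1].imshow(np.ma.masked_where(mask == 0, mask), cmap="autumn", alpha=0.6)
fig.savefig("mask_overlay.png", dpi=150)
plt.close(fig)
```

Plotting the predicted and ground-truth masks side by side in this way makes localization errors much easier to spot than comparing raw mask arrays.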